145 research outputs found

    Second-order optimality conditions for bilevel programs

    Second-order optimality conditions for bilevel programming problems depend on the second-order directional derivatives of the value functions or the solution mappings of the lower-level problems under certain regularity conditions, and these derivatives generally cannot be calculated or evaluated. To overcome this difficulty, we propose the notion of a bi-local solution. Under the Jacobian uniqueness conditions for the lower-level problem, we prove that a bi-local solution is a local minimizer of an associated one-level minimization problem. Based on this property, first-order necessary optimality conditions and second-order necessary and sufficient optimality conditions for a bi-local optimal solution of a given bilevel program are established. The second-order optimality conditions proposed here involve only second-order derivatives of the defining functions of the bilevel problem. The second-order sufficient optimality conditions are then used to derive the Q-linear convergence rate of the classical augmented Lagrangian method.
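    For context, a minimal sketch of the generic bilevel setting the abstract refers to; the symbols F, f, g, x, y and the KKT-based one-level rewrite below are illustrative notation, not the paper's own.

% Generic bilevel program (upper level constrained by lower-level optimality):
\begin{align*}
  \min_{x}\;    & F\bigl(x,\,y(x)\bigr) \\
  \text{s.t.}\; & y(x) \in \operatorname*{arg\,min}_{y}\ \bigl\{\, f(x,y) : g(x,y) \le 0 \,\bigr\}.
\end{align*}
% Under Jacobian uniqueness for the lower-level problem, y(x) is its unique
% KKT point, so the bilevel program can be replaced by a one-level problem
% in the variables (x, y, \lambda):
\begin{align*}
  \min_{x,\,y,\,\lambda}\; & F(x,y) \\
  \text{s.t.}\;            & \nabla_y f(x,y) + \nabla_y g(x,y)^{\top}\lambda = 0, \\
                           & 0 \le \lambda \;\perp\; -g(x,y) \ge 0.
\end{align*}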

    Federated Neural Architecture Search

    To preserve user privacy while enabling mobile intelligence, techniques have been proposed to train deep neural networks on decentralized data. However, training over decentralized data makes the design of neural architectures, already a difficult task, even harder. The difficulty is further amplified when designing and deploying different neural architectures for heterogeneous mobile platforms. In this work, we incorporate automatic neural architecture search into decentralized training, yielding a new DNN training paradigm called Federated Neural Architecture Search (federated NAS). To address the primary challenge of limited on-client computational and communication resources, we present FedNAS, a highly optimized framework for efficient federated NAS. FedNAS fully exploits the key observation that model candidates do not need to be fully re-trained during the architecture search process, and incorporates three key optimizations: parallel training of candidates on partial clients, early dropping of candidates with inferior performance, and dynamic round numbers. Tested on large-scale datasets and typical CNN architectures, FedNAS achieves model accuracy comparable to state-of-the-art NAS algorithms that train models with centralized data, and reduces client cost by up to two orders of magnitude compared to a straightforward design of federated NAS.
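    As a rough illustration of how the three optimizations above could interact, the following Python sketch simulates a federated NAS search loop; the names (sample_clients, train_and_evaluate, the candidate pool, the drop fraction) and the simulated accuracies are assumptions made for exposition, not the FedNAS framework itself.

import random

# Illustrative sketch of a federated NAS search loop with the three
# optimizations described in the abstract. All names and the simulated
# "training" are assumptions for exposition, not the FedNAS codebase.

NUM_CLIENTS = 100
CLIENTS_PER_ROUND = 10          # partial clients: only a subset participates
MAX_ROUNDS = 20
DROP_FRACTION = 0.5             # early-drop the worst half of candidates

def sample_clients(k):
    """Pick a random subset of clients for this round (partial participation)."""
    return random.sample(range(NUM_CLIENTS), k)

def train_and_evaluate(candidate, clients, rounds):
    """Simulate federated training of one architecture candidate on the
    sampled clients for a given number of rounds and return a proxy accuracy.
    A real system would run local training on each client and aggregate."""
    base = candidate["quality"]
    # More rounds and more clients give a better (but noisy) estimate.
    return base + 0.01 * rounds + 0.001 * len(clients) + random.gauss(0, 0.02)

def federated_nas_search(num_candidates=16):
    # Each candidate stands in for a sampled architecture; "quality" is a
    # hidden score the simulated search tries to discover.
    candidates = [{"id": i, "quality": random.uniform(0.5, 0.9)}
                  for i in range(num_candidates)]
    rounds_per_eval = 1  # dynamic round numbers: start cheap, refine later

    while len(candidates) > 1:
        clients = sample_clients(CLIENTS_PER_ROUND)
        # Candidates would be trained in parallel on groups of the sampled
        # clients; here each is simply evaluated on the same sample.
        scored = [(train_and_evaluate(c, clients, rounds_per_eval), c)
                  for c in candidates]
        scored.sort(key=lambda t: t[0], reverse=True)

        # Early dropping: discard candidates with inferior estimated accuracy
        # instead of re-training them to convergence.
        keep = max(1, int(len(scored) * (1 - DROP_FRACTION)))
        candidates = [c for _, c in scored[:keep]]

        # Dynamic round numbers: spend more rounds as the pool shrinks and
        # the surviving candidates need finer comparison.
        rounds_per_eval = min(MAX_ROUNDS, rounds_per_eval * 2)

    return candidates[0]

if __name__ == "__main__":
    best = federated_nas_search()
    print(f"selected architecture candidate #{best['id']}")

    The sketch keeps the structure minimal: a partial-client sample per round, a prune step that drops the weaker half of the candidate pool, and a round budget that grows as the pool shrinks.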